remove wait in all_to_all_single custom op #2646
Conversation
This pull request was exported from Phabricator. Differential Revision: D64666999

Summary:

# context

* remove the `torch.ops._c10d_functional.wait_tensor` call in all_to_all_single.
* use an `autograd.Function` implementation to create an `AllToAllSingle` function
* there is a wait_tensor after the all_to_all_single call: pytorch/pytorch#143533

Differential Revision: D64666999
Reviewed By: IvanKobzarev